UK privacy watchdog opens inquiry into X over Grok AI sexual deepfakes

The Guardian

Information Commissioner's Office to investigate whether Elon Musk's companies have complied with data protection law

The Information Commissioner's Office (ICO) has opened formal investigations into X and xAI over whether Elon Musk's companies have complied with data protection law after the Grok AI tool was used to generate sexual deepfake images without consent. The ICO said the reports raised "serious concerns" under UK data protection laws, such as whether "appropriate safeguards were built into Grok's design and deployment".

William Malcolm, the executive director of regulatory risk and innovation at the ICO, said: "The reports about Grok raise deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this. Losing control of personal data in this way can cause immediate and significant harm. This is particularly the case where children are involved."

In a separate statement, the regulator Ofcom said it was not investigating xAI, which provides the standalone Grok app. Ofcom also said its investigation into X, the social network formerly known as Twitter on which users can interact with Grok, was still gathering evidence, and warned that the inquiry could take months. The company has taken steps to address the issue and must be given a "full opportunity to make representations", Ofcom added.

On why it was not investigating xAI, the statement said: "When we opened our investigation into X, we said we were assessing whether we should also investigate xAI, as the provider of the standalone Grok service."


Facial-recognition: How Sports Direct and Spar are using Chinese-made cameras to spot shoplifters

Daily Mail - Science & tech

Sports Direct, Spar, Budgens, Costcutter and Southern Co-op are now among the growing number of British retailers using a controversial Chinese state-owned facial-recognition system. The biometric cameras work by scanning the faces of shoppers so they can be checked against a database of suspected criminals. But they have been branded 'Orwellian' and 'unlawful' by critics, who claim that staff could add people to a secret 'blacklist' without them knowing. So how does the facial-recognition system work, and which shops are already using it? Here, MailOnline breaks down everything you need to know about the controversial technology.


A Guide to ICO Audits: Artificial Intelligence (AI) Audits

#artificialintelligence

The report was published by the Information Commissioner's Office (ICO) in May 2022. The ICO is the UK's independent body set up to uphold information rights. The Information Commissioner is of the opinion that audit has an important role to play in educating and assisting organisations to meet their obligations. The Commissioner's Office has the power to carry out investigations in the form of compulsory data protection audits, but mostly conducts consensual audits under the provisions of s129 of the Data Protection Act. The report states that the benefits of AI are often outweighed by the risks it poses to the rights and freedoms of individuals.


How to survive as an AI ethicist

#artificialintelligence

To receive The Algorithm newsletter in your inbox every Monday, sign up here. It's never been more important for companies to ensure that their AI systems function safely, especially as new laws to hold them accountable kick in. The responsible AI teams they set up to do that are supposed to be a priority, but investment in them is still lagging behind. People working in the field suffer as a result, as I found in my latest piece. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while they often face a near-constant barrage of aggressive criticism online.


Facial recognition cameras in Southern Co-Op stores are 'adding customers to watch-lists'

Daily Mail - Science & tech

Co-Op is facing a legal challenge to its 'Orwellian' and 'unlawful' use of facial recognition cameras. Privacy rights group Big Brother Watch claimed supermarket staff could add people to a secret 'blacklist' without them knowing. But Co-Op says it is using the Facewatch system in shops with a history of crime, so it can protect its staff. Big Brother Watch said the independent grocery chain had installed the surveillance technology in 35 stores across Portsmouth, Bournemouth, Bristol, Brighton and Hove, Chichester, Southampton and London. It claimed staff could add individuals to a watch-list where their biometric information is kept for up to two years.


UK sets out proposals for new AI rulebook to unleash innovation and boost public trust in the technology

#artificialintelligence

New plans for regulating the use of artificial intelligence (AI) will be published today to help develop consistent rules to promote innovation in this groundbreaking technology and protect the public. It comes as the Data Protection and Digital Information Bill is introduced to Parliament, which will transform the UK's data laws to boost innovation in technologies such as AI. The Bill will seize the benefits of Brexit to keep a high standard of protection for people's privacy and personal data while delivering around £1 billion in savings for businesses. Artificial intelligence refers to machines which learn from data how to perform tasks normally performed by humans. For example, AI helps identify patterns in financial transactions that could indicate fraud, and helps clinicians diagnose illnesses based on chest images.


Clearview AI ordered to delete personal data of UK residents

AIHub

The Information Commissioner's Office (ICO) in the UK has fined facial recognition database company Clearview AI Inc more than £7.5m for using images of people that were scraped from websites and social media. Clearview AI collected the data to create a global online database, with one of the resulting applications being facial recognition. Clearview AI has also been ordered to delete the personal data it holds on UK residents, and to stop obtaining and using personal data that is publicly available on the internet. The ICO is the UK's independent authority set up to uphold information rights in the public interest. This action follows an investigation that it carried out in conjunction with the Office of the Australian Information Commissioner (OAIC).


The walls are closing in on Clearview AI

MIT Technology Review

The ICO found that Clearview AI had been in breach of data protection laws, collected personal data without people's consent, and asked for additional information, such as photos, when people asked if they were in the database. It found that this may have "acted as a disincentive" for people who objected to their data being scraped. "The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable," said John Edwards, the UK's information commissioner, in a statement. Clearview AI boasts one of the world's largest databases of people's faces, with 20 billion images that it has scraped off the internet from publicly available sources, such as social media, without their consent.


Clearview AI fined in UK for illegally storing images

#artificialintelligence

The Information Commissioner's Office (ICO) of the UK fined United States-based facial recognition firm Clearview AI £7.5 million for illegally storing images. The controversial company has been facing such issues for some time, and this new development is yet another blow for Clearview AI. The fine has been imposed on the company for its practice of collecting and storing images of citizens from social media platforms without their consent, which several countries regard as a severe threat to privacy. Moreover, the ICO has also ordered the US firm to remove UK citizens' data from its systems. According to the ICO, Clearview AI has stored more than 20 billion pictures of people in its database.


UK fines Clearview AI £7.5M for scraping citizens' data

#artificialintelligence

Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world. In November 2021, the UK's Information Commissioner's Office (ICO) imposed a potential fine of just over £17 million on Clearview AI. Today's announcement suggests Clearview AI got off relatively lightly.